Semantic Interpretation


Pragmatic inference of scalar implicature by LLMs

Cho, Ye-eun, Kim, Seong mook

arXiv.org Artificial Intelligence

This study investigates how Large Language Models (LLMs), specifically BERT (Devlin et al., 2019) and GPT-2 (Radford et al., 2019), perform pragmatic inference of scalar implicature, such as the term some. Two sets of experiments were conducted using cosine similarity and next sentence/token prediction as experimental methods. Experiment 1 showed that, in the absence of context, both models interpret some as the pragmatic implicature not all, aligning with human language processing. In experiment 2, in which a Question Under Discussion (QUD) was presented as a contextual cue, BERT performed consistently regardless of the type of QUD, while GPT-2 encountered processing difficulties when a certain type of QUD required pragmatic inference to derive the implicature. In terms of theoretical approaches, the findings revealed that BERT inherently incorporates the pragmatic implicature not all within the term some, adhering to the Default model (Levinson, 2000). In contrast, GPT-2 appears to encounter processing difficulties when inferring pragmatic implicature in context, consistent with the Context-driven model (Sperber and Wilson, 2002).
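The cosine-similarity method mentioned above can be sketched in miniature. The toy vectors and sentence labels below are purely illustrative assumptions (the study itself would derive embeddings from BERT or GPT-2 hidden states); the sketch only shows the comparison logic: if a model encodes the implicature, a sentence with some should sit closer in embedding space to a not all paraphrase than to an all paraphrase.

```python
from math import sqrt

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 4-dimensional "sentence embeddings" (illustrative values only).
emb_some    = [0.8, 0.1, 0.3, 0.2]  # "Some of the students passed."
emb_not_all = [0.7, 0.2, 0.4, 0.1]  # pragmatic reading: "not all passed"
emb_all     = [0.1, 0.9, 0.2, 0.6]  # literal upper bound: "all passed"

sim_pragmatic = cosine_similarity(emb_some, emb_not_all)
sim_literal = cosine_similarity(emb_some, emb_all)

# Pragmatic interpretation predicts the "some" sentence is closer to "not all".
print(sim_pragmatic > sim_literal)
```

In the actual experiments the same comparison would be run over many sentence pairs, with the winning paraphrase indicating which reading the model favors.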


Superlatives in Context: Explicit and Implicit Domain Restrictions for Superlative Frames

Pyatkin, Valentina, Webber, Bonnie, Dagan, Ido, Tsarfaty, Reut

arXiv.org Artificial Intelligence

Superlatives are used to single out elements with a maximal/minimal property. Semantically, superlatives perform a set comparison: something (or some things) has the min/max property out of a set. As such, superlatives provide an ideal phenomenon for studying implicit phenomena and discourse restrictions. While this comparison set is often not explicitly defined, its (implicit) restrictions can be inferred from the discourse context the expression appears in. In this work we provide an extensive computational study of the semantics of superlatives. We propose a unified account of superlative semantics which allows us to derive a broad-coverage annotation schema. Using this unified schema we annotated a multi-domain dataset of superlatives and their semantic interpretations. We specifically focus on interpreting implicit or ambiguous superlative expressions, by analyzing how the discourse context restricts the set of interpretations. In a set of experiments we then analyze how well models perform on variations of the task of predicting superlative semantics, with and without context. We show that the fine-grained semantics of superlatives in context can be challenging for contemporary models, including GPT-4.


Incremental Comprehension of Garden-Path Sentences by Large Language Models: Semantic Interpretation, Syntactic Re-Analysis, and Attention

Li, Andrew, Feng, Xianle, Narang, Siddhant, Peng, Austin, Cai, Tianle, Shah, Raj Sanjay, Varma, Sashank

arXiv.org Artificial Intelligence

When reading temporarily ambiguous garden-path sentences, misinterpretations sometimes linger past the point of disambiguation. This phenomenon has traditionally been studied in psycholinguistic experiments using online measures such as reading times and offline measures such as comprehension questions. Here, we investigate the processing of garden-path sentences and the fate of lingering misinterpretations using four large language models (LLMs): GPT-2, LLaMA-2, Flan-T5, and RoBERTa. The overall goal is to evaluate whether humans and LLMs are aligned in their processing of garden-path sentences and in the lingering misinterpretations past the point of disambiguation, especially when extra-syntactic information (e.g., a comma delimiting a clause boundary) is present to guide processing. We address this goal using 24 garden-path sentences that have optional transitive and reflexive verbs leading to temporary ambiguities. For each sentence, there is a pair of comprehension questions corresponding to the misinterpretation and the correct interpretation. In three experiments, we (1) measure the dynamic semantic interpretations of LLMs using the question-answering task; (2) track whether these models shift their implicit parse tree at the point of disambiguation (or by the end of the sentence); and (3) visualize the model components that attend to disambiguating information when processing the question probes. These experiments show promising alignment between humans and LLMs in the processing of garden-path sentences, especially when extra-syntactic information is available to guide processing.


AS-XAI: Self-supervised Automatic Semantic Interpretation for CNN

Sun, Changqi, Xu, Hao, Chen, Yuntian, Zhang, Dongxiao

arXiv.org Artificial Intelligence

Explainable artificial intelligence (XAI) aims to develop transparent explanatory approaches for "black-box" deep learning models. However, it remains difficult for existing methods to achieve a trade-off among the three key criteria of interpretability, namely, reliability, causality, and usability, which hinders their practical application. In this paper, we propose a self-supervised automatic semantic interpretable explainable artificial intelligence (AS-XAI) framework, which utilizes transparent orthogonal embedding semantic extraction spaces and row-centered principal component analysis (PCA) for global semantic interpretation of model decisions in the absence of human interference, without additional computational costs. In addition, the invariance of filter feature high-rank decomposition is used to evaluate model sensitivity to different semantic concepts. Extensive experiments demonstrate that robust and orthogonal semantic spaces can be automatically extracted by AS-XAI, providing more effective global interpretability for convolutional neural networks (CNNs) and generating human-comprehensible explanations. The proposed approach offers broad, fine-grained, extensible practical applications, including shared semantic interpretation under out-of-distribution (OOD) categories, auxiliary explanations for species that are challenging to distinguish, and classification explanations from various perspectives.


Natural Language Processing: The Technology That's Biased

#artificialintelligence

Natural Language Processing (NLP) refers to building machines that can understand and respond to voice data with their own text and speech. Natural Language Processing falls under the umbrella of Artificial Intelligence (AI), and recent models like the Bidirectional Encoder Representations from Transformers (BERT), Generative Pre-trained Transformer 3 (GPT-3) and the Pathways Language Model (PaLM) have made accurate human-machine communication possible. These large language models (LLMs) are trained on massive volumes of text with billions of parameters and are able to understand and answer reading comprehension questions as well as generate new text such as a summary. Put simply, LLMs are trained to predict the next words in a sentence, extending the autocomplete feature in messaging applications. But they can do much more, for example question answering, translation, image captioning, human-level dialogue agents, entity linking, or even data cleaning (for mixes of structured and unstructured data). NLP is already being used to automate some human tasks (RPA, robotic process automation); with the breathtaking advances of the last three years, NLP opens new potential for businesses to digitize company knowledge and disrupt incumbent business models.
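The "predict the next word" idea the article describes can be illustrated with a deliberately tiny stand-in for an LLM: a bigram model that counts, over a toy corpus, which word most often follows each word. Real LLMs learn vastly richer statistics with neural networks, but the training objective is the same in spirit. The corpus here is an invented example.

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for the web-scale text an LLM trains on.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat ate the fish ."
).split()

# Count bigrams: how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often in training, like autocomplete."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" more often than any other word
print(predict_next("sat"))  # "sat" is always followed by "on"
```

An LLM does the same kind of prediction, but conditioned on the whole preceding context rather than a single word, which is what enables the downstream capabilities the article lists.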


Teaching cars to drive with foresight: Self-learning process

#artificialintelligence

An empty street, a row of parked cars at the side: nothing to indicate that you should be careful. But wait: Isn't there a side street up ahead, half covered by the parked cars? Maybe I better take my foot off the gas -- who knows if someone's coming from the side. We constantly encounter situations like these when driving. Interpreting them correctly and drawing the right conclusions requires a lot of experience. In contrast, self-driving cars sometimes behave like a learner driver in his first lesson.


Learning From Unannotated QA Pairs to Analogically Disambiguate and Answer Questions

Crouse, Maxwell (Northwestern University) | McFate, Clifton (Northwestern University) | Forbus, Kenneth (Northwestern University)

AAAI Conferences

Creating systems that can learn to answer natural language questions has been a longstanding challenge for artificial intelligence. Most prior approaches focused on producing a specialized language system for a particular domain and dataset, and they required training on a large corpus manually annotated with logical forms. This paper introduces an analogy-based approach that instead adapts an existing general-purpose semantic parser to answer questions in a novel domain by jointly learning disambiguation heuristics and query construction templates from purely textual question-answer pairs. Our technique uses possible semantic interpretations of the natural language questions and answers to constrain a query-generation procedure, producing cases during training that are subsequently reused via analogical retrieval and composed to answer test questions. Bootstrapping an existing semantic parser in this way significantly reduces the number of training examples needed to accurately answer questions. We demonstrate the efficacy of our technique using the Geoquery corpus, on which it approaches state-of-the-art performance using 10-fold cross-validation, shows little decrease in performance with 2 folds, and achieves above 50% accuracy with as few as 10 examples.


Corpus-Based Approaches to Semantic Interpretation in Natural Language Processing

AI Magazine

In recent years, there has been a flurry of research into empirical, corpus-based learning approaches to natural language processing (NLP). Most empirical NLP work to date has focused on relatively low-level language processing such as part-of-speech tagging, text segmentation, and syntactic parsing. The success of these approaches has stimulated research in using empirical learning techniques in other facets of NLP, including semantic analysis--uncovering the meaning of an utterance. This article is an introduction to some of the emerging research in the application of corpus-based learning techniques to problems in semantic interpretation. In particular, we focus on two important problems in semantic interpretation, namely, word-sense disambiguation and semantic parsing.


Semantic Interpretation of Social Network Communities

Maheshwari, Tushar (Indian Institute of Information Technology - Chittoor) | Reganti, Aishwarya N. (Indian Institute of Information Technology - Chittoor) | Kumar, Upendra (Indian Institute of Information Technology - Chittoor) | Chakraborty, Tanmoy (University of Maryland, College Park) | Das, Amitava (Indian Institute of Information Technology - Chittoor)

AAAI Conferences

A community in a social network is considered to be a group of nodes densely connected internally and sparsely connected externally. Although previous work intensely studied network topology within a community, its semantic interpretation is hardly understood. In this paper, we attempt to understand whether individuals in a community possess similar Personalities, Values and Ethical background. Finally, we show that Personality and Values models could be used as features to discover more accurate community structure compared to the one obtained from only network information.